Include mtmd files for publishing in llama-cpp-sys-2 #806


Draft
babichjacob wants to merge 2 commits into base: main

Conversation

@babichjacob (Contributor) commented Aug 18, 2025

While trying to use the multimodality support added in #744 (thank you to everyone who contributed!), I found that the repository builds successfully locally, but not as a dependency of another crate pulled from crates.io.

Most relevant part of the build error:

  CMake Error at CMakeLists.txt:206 (add_subdirectory): The source directory
  ... /llama-cpp-sys-2-0.1.118/llama.cpp/tools does not contain a
  CMakeLists.txt file.

Full build output:
  CMAKE_TOOLCHAIN_FILE_x86_64-pc-windows-msvc = None
  CMAKE_TOOLCHAIN_FILE_x86_64_pc_windows_msvc = None
  HOST_CMAKE_TOOLCHAIN_FILE = None
  CMAKE_TOOLCHAIN_FILE = None
  CMAKE_GENERATOR_x86_64-pc-windows-msvc = None
  CMAKE_GENERATOR_x86_64_pc_windows_msvc = None
  HOST_CMAKE_GENERATOR = None
  CMAKE_GENERATOR = None
  CMAKE_PREFIX_PATH_x86_64-pc-windows-msvc = None
  CMAKE_PREFIX_PATH_x86_64_pc_windows_msvc = None
  HOST_CMAKE_PREFIX_PATH = None
  CMAKE_PREFIX_PATH = None
  CMAKE_x86_64-pc-windows-msvc = None
  CMAKE_x86_64_pc_windows_msvc = None
  HOST_CMAKE = None
  CMAKE = None
  running: "cmake" "C:\\Users\\Jacob\\scoop\\persist\\rustup\\.cargo\\registry\\src\\index.crates.io-1949cf8c6b5b557f\\llama-cpp-sys-2-0.1.118\\llama.cpp" "-G" "Visual Studio 17 2022" "-Thost=x64" "-Ax64" "-DLLAMA_BUILD_TESTS=OFF" "-DLLAMA_BUILD_EXAMPLES=OFF" "-DLLAMA_BUILD_SERVER=OFF" "-DLLAMA_BUILD_TOOLS=OFF" "-DLLAMA_CURL=OFF" "-DLLAMA_BUILD_COMMON=ON" "-DLLAMA_BUILD_TOOLS=ON" "-DCMAKE_BUILD_PARALLEL_LEVEL=12" "-DBUILD_SHARED_LIBS=OFF" "-DGGML_OPENMP=ON" "-DCMAKE_INSTALL_PREFIX=R:\\target\\debug\\build\\llama-cpp-sys-2-ec27c7a4781527a7\\out" "-DCMAKE_C_FLAGS= /O2 /DNDEBUG /Ob2 -nologo -MD -Brepro" "-DCMAKE_C_FLAGS_RELEASE= /O2 /DNDEBUG /Ob2 -nologo -MD -Brepro" "-DCMAKE_CXX_FLAGS= /O2 /DNDEBUG /Ob2 -nologo -MD -Brepro" "-DCMAKE_CXX_FLAGS_RELEASE= /O2 /DNDEBUG /Ob2 -nologo -MD -Brepro" "-DCMAKE_ASM_FLAGS= -nologo -MD -Brepro" "-DCMAKE_ASM_FLAGS_RELEASE= -nologo -MD -Brepro" "-DCMAKE_BUILD_TYPE=Release"
  -- Selecting Windows SDK version 10.0.26100.0 to target Windows 10.0.22631.
  -- The C compiler identification is MSVC 19.44.35214.0
  -- The CXX compiler identification is MSVC 19.44.35214.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Found Git: C:/Users/Jacob/scoop/shims/git.exe (found version "2.50.1.windows.1")
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
  -- Looking for pthread_create in pthreads
  -- Looking for pthread_create in pthreads - not found
  -- Looking for pthread_create in pthread
  -- Looking for pthread_create in pthread - not found
  -- Found Threads: TRUE
  -- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
  -- CMAKE_SYSTEM_PROCESSOR: AMD64
  -- CMAKE_GENERATOR_PLATFORM: x64
  -- GGML_SYSTEM_ARCH: x86
  -- Including CPU backend
  -- Found OpenMP_C: -openmp (found version "2.0")
  -- Found OpenMP_CXX: -openmp (found version "2.0")
  -- Found OpenMP: TRUE (found version "2.0")
  -- x86 detected
  -- Performing Test HAS_AVX_1
  -- Performing Test HAS_AVX_1 - Success
  -- Performing Test HAS_AVX2_1
  -- Performing Test HAS_AVX2_1 - Success
  -- Performing Test HAS_FMA_1
  -- Performing Test HAS_FMA_1 - Success
  -- Performing Test HAS_AVX512_1
  -- Performing Test HAS_AVX512_1 - Failed
  -- Performing Test HAS_AVX512_2
  -- Performing Test HAS_AVX512_2 - Failed
  -- Adding CPU backend variant ggml-cpu: /arch:AVX2 GGML_AVX2;GGML_FMA;GGML_F16C
  -- ggml version: 0.0.0
  -- ggml commit:  unknown
  -- Configuring incomplete, errors occurred!

--- stderr
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
CMake Warning at common/CMakeLists.txt:32 (message):
Git repository not found; to enable automatic generation of build info,
make sure Git is installed and the project is a Git repository.

CMake Error at CMakeLists.txt:206 (add_subdirectory):
The source directory

  C:/Users/Jacob/scoop/persist/rustup/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/llama-cpp-sys-2-0.1.118/llama.cpp/tools

does not contain a CMakeLists.txt file.

thread 'main' panicked at C:\Users\Jacob\scoop\persist\rustup\.cargo\registry\src\index.crates.io-1949cf8c6b5b557f\cmake-0.1.54\src\lib.rs:1119:5:

command did not execute successfully, got: exit code: 1

build script failed, must exit now
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

This PR fixes that by adding the file to the `include` list in llama-cpp-sys-2's Cargo.toml, so it is shipped with the published package.

I will continue testing to see whether other files, such as tools/mtmd/CMakeLists.txt, also need to be included.
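For readers unfamiliar with the mechanism: `cargo publish` only ships files matched by the manifest's `include` list (when one is present), so any file CMake reads at configure time must appear there. A minimal sketch of the kind of change this PR describes; the exact glob patterns below are assumptions, not the PR's actual diff:

```toml
[package]
name = "llama-cpp-sys-2"
# ...

# Hypothetical sketch: every file the build script's CMake configure step
# touches must be listed, or the published crate will fail to build.
include = [
    "llama.cpp/tools/CMakeLists.txt",
    "llama.cpp/tools/mtmd/**",
    # ...plus whatever else `cargo package --list` shows is still missing
]
```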

@babichjacob babichjacob marked this pull request as draft August 19, 2025 00:01
@babichjacob babichjacob changed the title Include llama.cpp/tools/CMakeLists.txt for publishing in llama-cpp-sys-2 Include mtmd files for publishing in llama-cpp-sys-2 Aug 19, 2025
@babichjacob (Contributor, Author) commented:

Paused work on this after a few minutes once

  CMake Error at tools/CMakeLists.txt:17 (add_subdirectory):
    add_subdirectory given source "batched-bench" which is not an existing
    directory.


  CMake Error at tools/CMakeLists.txt:18 (add_subdirectory):
    add_subdirectory given source "gguf-split" which is not an existing
    directory.


  CMake Error at tools/CMakeLists.txt:19 (add_subdirectory):
    add_subdirectory given source "imatrix" which is not an existing directory.


  CMake Error at tools/CMakeLists.txt:20 (add_subdirectory):
    add_subdirectory given source "llama-bench" which is not an existing
    directory.


  CMake Error at tools/CMakeLists.txt:21 (add_subdirectory):
    add_subdirectory given source "main" which is not an existing directory.


  CMake Error at tools/CMakeLists.txt:22 (add_subdirectory):
    add_subdirectory given source "perplexity" which is not an existing
    directory.


  CMake Error at tools/CMakeLists.txt:23 (add_subdirectory):
    add_subdirectory given source "quantize" which is not an existing
    directory.


  CMake Error at tools/CMakeLists.txt:27 (add_subdirectory):
    add_subdirectory given source "run" which is not an existing directory.


  CMake Error at tools/CMakeLists.txt:28 (add_subdirectory):
    add_subdirectory given source "tokenize" which is not an existing
    directory.


  CMake Error at tools/CMakeLists.txt:29 (add_subdirectory):
    add_subdirectory given source "tts" which is not an existing directory.


  CMake Error at tools/CMakeLists.txt:36 (add_subdirectory):
    add_subdirectory given source "cvector-generator" which is not an existing
    directory.


  CMake Error at tools/CMakeLists.txt:37 (add_subdirectory):
    add_subdirectory given source "export-lora" which is not an existing
    directory.

all of these directories that need to be included showed up, and I started to wonder whether the entire llama.cpp repository should be included or whether we should keep figuring out precisely what is needed.

(I'll come back to this later, energy allowing.)
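One way to narrow down precisely what is needed is to extract every `add_subdirectory(...)` target from tools/CMakeLists.txt and check each against the crate's include list. A sketch using only standard Unix tools, demonstrated on an inline sample mirroring the errors above rather than the real file:

```shell
# Hypothetical sample standing in for llama.cpp/tools/CMakeLists.txt.
sample=$(mktemp)
cat > "$sample" <<'EOF'
add_subdirectory(batched-bench)
add_subdirectory(mtmd)
EOF

# Print each add_subdirectory() target, one per line.
sed -n 's/.*add_subdirectory(\([^)]*\)).*/\1/p' "$sample"
# → batched-bench
# → mtmd
```

Against a real checkout you would point `sed` at `llama.cpp/tools/CMakeLists.txt` and compare the output with what `cargo package --list` reports as packaged.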

@MarcusDunn (Contributor) commented:

> all these directories that need to be included showed up and I started to wonder if the entire llama.cpp repository should be included or if we should continue figuring out what precisely is needed.

We used to include the entire repository, but ran into crates.io's upload size limits.
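crates.io's default package size cap is 10 MiB, so before vendoring more of llama.cpp it helps to measure how many bytes a candidate include pattern would add. A minimal sketch using only standard Unix tools, run against a throwaway stand-in directory (all paths and file sizes here are hypothetical, not the real tree):

```shell
# Build a fake checkout standing in for llama.cpp/tools.
demo=$(mktemp -d)
mkdir -p "$demo/llama.cpp/tools/mtmd"
printf 'add_subdirectory(mtmd)\n' > "$demo/llama.cpp/tools/CMakeLists.txt"
head -c 4096 /dev/zero > "$demo/llama.cpp/tools/mtmd/mtmd.cpp"  # fake 4 KiB source

# Sum the bytes of everything the pattern would newly include.
total=$(find "$demo/llama.cpp/tools" -type f -exec cat {} + | wc -c)
echo "tools/ adds $total bytes"
```

In a real checkout you would point `find` at the actual `llama.cpp/tools` path, or simply run `cargo package --list` to see exactly which files would ship.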
